What Computers Can’t Do: A Critique of Artificial Reason

Tags: #technology #ai #philosophy #cognitive science #artificial intelligence

Author: Hubert L. Dreyfus

Overview

In ‘What Computers Can’t Do: A Critique of Artificial Reason,’ I examine the philosophical foundations of artificial intelligence (AI) and challenge the prevailing optimism surrounding its potential. My central argument is that human intelligence is not reducible to rule-based symbol manipulation, as much of the AI research community assumes. This view, rooted in a long philosophical tradition, is manifest in four key assumptions I critique: the biological assumption (that the brain is a digital computer), the psychological assumption (that the mind is an information processor following rules), the epistemological assumption (that all knowledge can be formalized), and the ontological assumption (that the world consists of independent, objective facts).

I support my arguments with evidence from AI research itself, highlighting persistent setbacks in areas like language translation, problem-solving, and pattern recognition. These failures, I argue, are not merely technological but reveal fundamental limitations of the digital computational model when applied to human intelligence. I emphasize the crucial role of the body, the situation, and human needs in shaping our understanding and guiding our actions, aspects that AI research has largely ignored.

I conclude by suggesting that a more fruitful approach to AI would focus on developing systems that engage with the world in an embodied, situated, and context-dependent way, recognizing the importance of ambiguity tolerance, fringe consciousness, and the ability to distinguish essential from inessential information. I advocate ‘man-machine symbiosis’ as a more promising path, one that harnesses the unique capabilities of both human and artificial intelligence.

Book Outline

Introduction

This book investigates the fundamental assumptions underlying the field of artificial intelligence, assumptions rooted in a philosophical tradition that views all knowledge as formalizable and calculable. I examine the early successes and subsequent setbacks in areas like language translation, problem-solving, and pattern recognition, questioning the prevailing assumption that human intelligence operates like a digital computer following explicit rules. I challenge the notion that the complex, nuanced, and often intuitive aspects of human understanding can be reduced to a series of formal instructions. The book is structured around a critique of four key assumptions: biological, psychological, epistemological, and ontological.

Key concept: “The difference between the mathematical mind (‘esprit de geometrie’) and the perceptive mind (‘esprit de finesse’): the reason that mathematicians are not perceptive is that they do not see what is before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement. … These principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. We must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree. … Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous … the mind … does it tacitly, naturally, and without technical rules.” - Pascal, Pensees

1. Phase I (1957-1962) Cognitive Simulation

The first phase of artificial intelligence research, known as Cognitive Simulation (CS), focused on simulating human cognitive processes using heuristic programs. This approach initially showed promise in areas like game playing and problem-solving, leading to early successes with programs like the Logic Theorist and the General Problem Solver (GPS). However, these early successes were followed by significant challenges as researchers attempted to tackle more complex tasks like language translation and pattern recognition. The inherent ambiguity and context-dependence of human language, along with the difficulties in representing and accessing the vast amount of knowledge required for understanding, proved to be significant obstacles. By 1962, a pattern had emerged: early, dramatic success followed by diminishing returns, disenchantment, and in some cases, pessimism.

Key concept: “These [suggested] mechanisms are so complicated that it is impossible to predict whether they will work.”

2. Phase II (1962-1967) Semantic Information Processing

The second phase, marked by a shift towards semantic information processing, aimed to overcome the limitations of CS by developing more sophisticated representations of meaning. Programs like Bobrow’s STUDENT, Evans’ analogy program, and Quillian’s Semantic Memory Program demonstrated the potential of incorporating semantic information into AI systems. However, these programs were still limited by their dependence on restricted contexts and the programmer’s pre-structuring of the problem. The underlying challenge of representing and accessing the vast, interconnected, and context-dependent knowledge required for human-level intelligence remained unsolved. Despite the lack of significant progress, researchers in the field remained optimistic, often resorting to the “fallacy of the successful first step” – assuming that early progress guaranteed future success.

Key concept: “We simply do not know enough about how powerful Quillian’s methods would be when provided with a more substantial knowledge bank.”

3. The Biological Assumption

This chapter critically examines the biological assumption: the belief that the human brain functions like a digital computer. I argue that this analogy is based on outdated models of brain function. The increasing evidence pointing to the analogue and interactive nature of neural processes suggests that the brain operates in a fundamentally different way than digital computers, challenging the idea that human intelligence can be readily replicated in a machine.

Key concept: “Now, speaking specifically of the human nervous system, this is an enormous mechanism — at least 10⁶ times larger than any artifact with which we are familiar — and its activities are correspondingly varied and complex. … It is possible to handle all these processes by digital methods … and to process the digitalized, and usually numericized, information by algebraical (i.e., basically arithmetical) methods. This is probably the way a human designer would at present approach such a problem. The available evidence, though scanty and inadequate, rather tends to indicate that the human nervous system uses different principles and procedures.”

4. The Psychological Assumption

This chapter challenges the psychological assumption, which posits that the human mind operates like a digital computer following heuristic rules. Through a close examination of studies in cognitive simulation, I demonstrate that this approach fails to account for the flexible, context-dependent, and often intuitive aspects of human problem-solving, language use, and pattern recognition. I highlight the importance of fringe consciousness, ambiguity tolerance, and the ability to distinguish essential from inessential information — capabilities that seem to defy formalization in terms of rules. Ultimately, the success of cognitive simulation programs relies on the programmer’s implicit understanding and pre-structuring of the problem, rather than on the machine’s independent intelligence.

Key concept: “Can someone be a man’s teacher in this? Certainly. From time to time he gives him the right tip…. This is what learning and teaching are like here. … What one acquires here is not a technique; one learns correct judgments. There are also rules, but they do not form a system, and only experienced people can apply them right. Unlike calculation rules.” – Wittgenstein

5. The Epistemological Assumption

This chapter addresses the epistemological assumption, the belief that all knowledge can be formalized into explicit rules for a digital computer to follow. I argue that this assumption, rooted in a long philosophical tradition, is unwarranted when applied to human intelligence. Drawing on the work of Wittgenstein, I highlight the role of context, intuition, and judgment in human understanding, processes that resist reduction to formal rules. I further argue that even if a complete formalization of knowledge were possible, it wouldn’t necessarily enable a computer to replicate human performance, as a theory of competence (what one is able to do) is not a theory of performance (how one actually does it).

Key concept: “We are unable clearly to circumscribe the concepts we use; not because we don’t know their real definition, but because there is no real ‘definition’ to them. To suppose that there must be would be like supposing that whenever children play with a ball they play a game according to strict rules.” - Wittgenstein

6. The Ontological Assumption

This chapter tackles the ontological assumption, the belief that the world can be exhaustively analyzed into a set of determinate, independent facts. I argue that this assumption, while appealing in its simplicity and promise of control, is ultimately untenable. I draw on the work of phenomenologists like Heidegger and Merleau-Ponty to show how human experience is always situated within a context of meaning, where the relevance and significance of facts are not fixed but determined by our purposes, concerns, and embodied interaction with the world. I suggest that the difficulty artificial intelligence faces in representing and accessing knowledge stems from the misguided attempt to treat a holistic and interconnected world as a collection of isolated data points.

Key concept: “… If someone wishes to say: ‘There is something common to all these constructions — namely the disjunction of all their common properties’ — I should reply: Now you are only playing with words. One might as well say: ‘Something runs through the whole thread — namely the continuous overlapping of these fibres.’” – Wittgenstein

7. The Role of the Body in Intelligent Behavior

I contend that the body plays a fundamental role in intelligent behavior, a role that cannot be easily replicated in a disembodied digital computer. Drawing on the work of Merleau-Ponty and Polanyi, I explore how our embodied skills and experiences shape our perception, understanding, and interaction with the world. I argue that the body’s capacity for flexible, context-dependent action, informed by global anticipations and ongoing feedback, is essential for navigating the complexities of the human world. The challenges faced by AI in developing robots that can perform tasks as simple as grasping a block underscore the limitations of a disembodied, rule-based approach to intelligence.

Key concept: “Although such machines could do many things as well as, or perhaps even better than men, they would infallibly fail in certain others… For while reason is a universal instrument which can be used in all sorts of situations, the organs of a machine have to be arranged in a particular way for each particular action. From this it follows that it is morally [i.e., practically] impossible that there should be enough different devices in a machine to make it behave in all the occurrences of life as our reason makes us behave.” - Descartes

8. The Situation: Orderly Behavior Without Recourse to Rules

This chapter examines the importance of the situation in understanding intelligent behavior. I argue that human beings are always already situated in a world that is pre-structured by our needs, concerns, and practices, a world that cannot be fully captured in a formal model. This ‘background’ understanding, often implicit and unarticulated, enables us to zero in on relevant information, disambiguate language, and navigate complex social interactions. The difficulties faced by AI in representing and accessing knowledge highlight the limitations of an approach that seeks to understand intelligence apart from the rich, context-dependent world in which it operates.

Key concept: “The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something — because it is always before one’s eyes.)” - Wittgenstein

9. The Situation as a Function of Human Needs

I further explore the role of human needs and values in shaping the situation and guiding our actions. Drawing on the work of Kierkegaard and Todes, I argue that our needs are not pre-determined but emerge through our embodied engagement with the world. This ‘creative discovery’ of what satisfies our needs is a retroactive process, one that constantly shapes and reshapes our understanding of ourselves and the world around us. This open-ended, dynamic nature of human needs poses a challenge for AI, which relies on pre-defined objectives and utility functions to guide behavior.

Key concept: “Thus human beings do not begin with a genetic table of needs or values which they reveal to themselves as they go along. Nor, when they are authentic, do they arbitrarily adopt values which are imposed by their environment. Rather, in discovering what they need they make more specific a general need which was there all along but was not determinate.”

Conclusion: The Scope and Limits of Artificial Reason

This concluding chapter brings together the various strands of the argument to delineate the limits of artificial intelligence. I argue that the difficulties faced by AI stem not from technological limitations but from fundamental misunderstandings about the nature of human intelligence. The failure to replicate human performance in areas like language understanding, pattern recognition, and problem-solving, despite decades of research, suggests that human intelligence is not simply a matter of processing information according to rules. I propose that a more fruitful approach to AI would focus on developing systems that can engage with the world in a more embodied, situated, and context-dependent way, recognizing the crucial role of ambiguity tolerance, fringe consciousness, and the ability to distinguish essential from inessential information. However, the very nature of digital computation, with its reliance on explicit rules and determinate data, may ultimately preclude the possibility of achieving human-level intelligence in a machine. The recent trend towards ‘man-machine symbiosis’ may offer a more promising path, allowing us to harness the unique capabilities of both human and artificial intelligence.

Key concept: “Efficient machines for such problems as pattern recognition, language translation, and so on, may require a different type of computer than any we have today. It is my feeling that this will be a computer whose natural operation is in terms of patterns, concepts, and vague similarities, rather than sequential operations on ten-digit numbers.” - C.E. Shannon

Essential Questions

1. What are the fundamental differences between human and artificial intelligence, according to Dreyfus?

Dreyfus argues that human intelligence relies on four fundamental, non-programmable ‘information processing’ abilities: fringe consciousness, ambiguity tolerance, essential/inessential discrimination, and perspicuous grouping. These abilities, rooted in our embodied experience and our being-in-the-world, allow us to navigate the complexities of the human world, disambiguate language, recognize patterns, and solve problems in a flexible and context-dependent way. Dreyfus supports his argument by highlighting the persistent failures of AI research to replicate these abilities in machines, suggesting that these failures are not merely technological but reveal fundamental limitations of the digital computational model when applied to human intelligence.

2. What are the four key assumptions underlying AI research that Dreyfus critiques, and how does he challenge each one?

The four key assumptions Dreyfus critiques are: 1) the biological assumption (that the brain is a digital computer), which he refutes based on evidence from neurophysiology; 2) the psychological assumption (that the mind is an information processor following rules), which he challenges by highlighting the role of intuition and non-rule-governed behavior in human cognition; 3) the epistemological assumption (that all knowledge can be formalized), which he argues is untenable due to the role of context and tacit understanding; and 4) the ontological assumption (that the world consists of independent, objective facts), which he critiques by showing how human understanding is shaped by our embodied experience and our being-in-the-world.

3. What are the limits of artificial intelligence, and what accounts for these limitations, according to Dreyfus?

Dreyfus suggests that the success of AI has been limited to areas involving simple, formal systems (like logic or game playing with well-defined rules), where brute force computational methods can be effectively applied. However, AI has consistently struggled in areas requiring complex, nuanced, and context-dependent understanding, such as natural language processing, pattern recognition in unstructured environments, and open-ended problem-solving. These difficulties arise because human intelligence relies on our embodied experience, our understanding of the situation, and our ability to deal with ambiguity and fuzzy concepts—capacities that are difficult, if not impossible, to program into a digital computer.

4. What are the potential future directions for AI research, according to Dreyfus?

While skeptical towards the possibility of strong AI, Dreyfus acknowledges the potential of AI in areas involving well-defined, formal systems and suggests a shift towards ‘man-machine symbiosis,’ where computers augment human intelligence rather than replacing it. He envisions AI systems that assist humans in tasks like pattern recognition, language translation, and problem-solving, leveraging their computational power while relying on human judgment and intuition to handle the nuances and ambiguities of real-world situations. He also sees potential in exploring alternative computational models, such as analog computation and neural networks, which might be better suited for capturing the flexible, context-dependent nature of human intelligence.

Key Takeaways

1. Human intelligence is fundamentally embodied and situated.

Dreyfus argues that human beings are not simply information processors but embodied, situated agents whose understanding is shaped by their interactions with the world. Our bodies, our past experiences, our current needs and goals, and the social and cultural contexts in which we live all contribute to how we perceive, understand, and act in the world. This situatedness is crucial for tasks like disambiguating language, recognizing patterns, and solving problems in a flexible and adaptive way.

Practical Application:

In AI product design, it is crucial to understand that users do not approach interfaces as a collection of isolated features but as tools embedded in a specific context. Designing for user needs and goals means understanding the broader situation in which the product will be used and anticipating the user’s expectations and potential tasks. This could involve incorporating features that allow for flexible and adaptable use, accommodating ambiguity, and providing guidance based on the user’s current context and past interactions. For instance, a voice assistant could learn from past user requests to provide more relevant responses in the future, rather than simply following pre-programmed rules.
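
As a minimal illustration of the voice-assistant example, the sketch below ranks candidate intents by how often they were confirmed in past interactions containing similar words, rather than by fixed rules alone. It is a toy sketch: the names (InteractionLog, record, rank_intents) and the word-overlap scoring are illustrative assumptions, not anything drawn from the book or a particular assistant platform.

```python
from collections import Counter, defaultdict


class InteractionLog:
    """Records which intent the user actually confirmed for past utterances."""

    def __init__(self):
        # word -> Counter of intents the user confirmed when that word appeared
        self.history = defaultdict(Counter)

    def record(self, utterance: str, confirmed_intent: str) -> None:
        for word in utterance.lower().split():
            self.history[word][confirmed_intent] += 1

    def rank_intents(self, utterance: str, candidates: list[str]) -> list[str]:
        # Score each candidate by how often it co-occurred with this
        # utterance's words in past, user-confirmed interactions.
        scores = Counter({c: 0 for c in candidates})
        for word in utterance.lower().split():
            for intent, count in self.history[word].items():
                if intent in scores:
                    scores[intent] += count
        return [intent for intent, _ in scores.most_common()]


log = InteractionLog()
log.record("play some jazz", "music.play")
log.record("play the news briefing", "news.play")
log.record("play more jazz please", "music.play")
print(log.rank_intents("play something jazzy", ["music.play", "news.play"]))
# -> ['music.play', 'news.play']
```

In practice a learned language model would replace the word-overlap heuristic, but the structure is the relevant point: candidates are ranked against a record of confirmed context rather than against static, pre-programmed rules.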

2. Fringe consciousness is crucial for human intelligence.

Dreyfus points to the role of ‘fringe consciousness’ in human intelligence—our ability to process information and respond to cues that are not directly in the focus of our attention. This peripheral awareness contributes to our understanding of the situation, helping us to zero in on relevant information and make sense of ambiguous or incomplete data.

Practical Application:

For AI product engineers, understanding the role of fringe consciousness means designing systems that can handle information that is not explicitly stated or directly available. This could involve incorporating probabilistic reasoning, allowing for uncertain or incomplete data, and developing AI systems that can learn from implicit cues and user behavior. For instance, a recommendation system could learn from a user’s past browsing history and purchase patterns to suggest relevant products, even if the user hasn’t explicitly stated their preferences.
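
The recommendation example might be sketched as follows, assuming implicit signals such as dwell time are available. Laplace-smoothed estimates keep unseen categories in play rather than ruling them out, a crude stand-in for tolerating incomplete evidence; the names (ImplicitProfile, observe, preference) and the 30-second dwell normalization are hypothetical.

```python
from collections import defaultdict


class ImplicitProfile:
    def __init__(self, smoothing: float = 1.0):
        self.views = defaultdict(float)   # category -> accumulated implicit signal
        self.total = 0.0
        self.smoothing = smoothing

    def observe(self, category: str, dwell_seconds: float) -> None:
        # Longer dwell time is treated as a stronger (but never explicit) signal.
        weight = min(dwell_seconds / 30.0, 1.0)
        self.views[category] += weight
        self.total += weight

    def preference(self, category: str, n_categories: int) -> float:
        # Smoothed estimate: categories with no evidence keep a small nonzero
        # probability instead of being ruled out entirely.
        return (self.views[category] + self.smoothing) / (
            self.total + self.smoothing * n_categories
        )


profile = ImplicitProfile()
profile.observe("hiking gear", dwell_seconds=45)
profile.observe("cookware", dwell_seconds=5)
categories = ["hiking gear", "cookware", "office chairs"]
ranked = sorted(categories,
                key=lambda c: profile.preference(c, len(categories)),
                reverse=True)
print(ranked)  # -> ['hiking gear', 'cookware', 'office chairs']
```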

3. Formalization of all knowledge is problematic, if not impossible.

Dreyfus challenges the notion that all knowledge can be formalized into explicit rules. He argues that much of our understanding relies on tacit knowledge, intuition, and judgment—capacities that are difficult, if not impossible, to program into a computer. This has implications for the feasibility of developing AI systems that can truly understand and reason about the world in the same way humans do.

Practical Application:

In developing AI systems, it is essential to acknowledge that a complete formalization of knowledge is unlikely. This means focusing on developing AI systems that can learn and adapt to new situations and contexts, rather than relying solely on pre-programmed rules. It also suggests the need for collaboration between humans and AI systems, leveraging the strengths of each. For example, in medical diagnosis, AI systems could analyze large datasets and suggest potential diagnoses, but the final decision would still involve human judgment and interpretation of the nuanced context of each individual case.
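
A hypothetical sketch of this human-in-the-loop pattern: a stand-in scoring function proposes ranked suggestions, and the final call is always delegated to a clinician callback. The findings, weights, and names are invented for illustration and do not represent a real diagnostic model.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    diagnosis: str
    confidence: float


def propose(findings: dict[str, bool]) -> list[Suggestion]:
    # Stand-in for a learned model: crude rule-of-thumb scores over findings.
    score_a = 0.4 * findings.get("fever", False) + 0.5 * findings.get("cough", False)
    score_b = 0.7 * findings.get("rash", False)
    return sorted(
        [Suggestion("condition A", score_a), Suggestion("condition B", score_b)],
        key=lambda s: s.confidence,
        reverse=True,
    )


def decide(findings: dict[str, bool], clinician_review) -> str:
    suggestions = propose(findings)
    # The system never finalizes a diagnosis: every suggestion, however
    # confident, is handed to the clinician along with the raw findings.
    return clinician_review(suggestions, findings)


# Example: the "clinician" here is a stub that picks the top suggestion
# only after seeing the full context.
final = decide(
    {"fever": True, "cough": True, "rash": False},
    clinician_review=lambda suggestions, findings: suggestions[0].diagnosis,
)
print(final)  # -> "condition A"
```

The point of the design is that the system’s output is advisory by construction: no code path produces a final diagnosis without passing through the human review step.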

Suggested Deep Dive

Chapter: The Situation: Orderly Behavior Without Recourse to Rules (Chapter 8)

This chapter delves into Dreyfus’s concept of the ‘situation’ and its crucial role in shaping intelligent behavior. This concept offers a valuable lens for understanding the limitations of current AI systems and the importance of context in developing truly intelligent machines. Understanding this chapter is key to grasping Dreyfus’s central argument and its implications for the future of AI.

Memorable Quotes

Introduction, p. 13

“The difference between the mathematical mind (‘esprit de geometrie’) and the perceptive mind (‘esprit de finesse’): the reason that mathematicians are not perceptive is that they do not see what is before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement. … These principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. We must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree. … Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous … the mind … does it tacitly, naturally, and without technical rules.” - Pascal, Pensees

Introduction, p. 23

“Can someone be a man’s teacher in this? Certainly. From time to time he gives him the right tip…. This is what learning and teaching are like here. … What one acquires here is not a technique; one learns correct judgments. There are also rules, but they do not form a system, and only experienced people can apply them right. Unlike calculation rules.” – Wittgenstein

Introduction, p. 37

“We see a complicated network of similarities overlapping and criss-crossing: Sometimes overall similarities, sometimes similarities of detail. I can think of no better expression to characterize these similarities than “family resemblances”; for the various resemblances between members of a family: build, features, color of eyes, gait, temperament, etc. etc., overlap and criss-cross in the same way. … We extend our concept … as in spinning a thread we twist fiber on fiber.” – Wittgenstein

The Epistemological Assumption, p. 50

“We are unable clearly to circumscribe the concepts we use; not because we don’t know their real definition, but because there is no real ‘definition’ to them. To suppose that there must be would be like supposing that whenever children play with a ball they play a game according to strict rules.” - Wittgenstein

The Situation: Orderly Behavior Without Recourse to Rules, p. 174

“The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something — because it is always before one’s eyes.)” - Wittgenstein

Comparative Analysis

Dreyfus’s work stands in stark contrast to the prevailing optimism of early AI researchers like Allen Newell, Herbert Simon, and Marvin Minsky, who believed in the imminent arrival of human-level artificial intelligence. His critique aligns with the views of philosophers like Wittgenstein and Heidegger, who emphasized the importance of context, embodiment, and the limits of formalization in understanding human intelligence. Dreyfus’s focus on the body’s role in intelligence foreshadows later developments in embodied cognition and robotics, while his critique of the ontological assumption resonates with contemporary discussions on the limitations of big data and the importance of meaning-making. While Dreyfus’s skepticism towards ‘strong AI’ remains controversial, his work has profoundly influenced subsequent discussions on the nature of intelligence and the potential of AI, pushing researchers to consider alternative approaches that go beyond rule-based symbol manipulation.

Reflection

Dreyfus’s ‘What Computers Can’t Do’ remains a seminal work in the philosophy of AI, prompting critical reflection on the nature of intelligence and the limitations of computational models. His insights, while initially met with resistance from the AI community, have proven to be prescient, foreshadowing many of the challenges AI continues to face in areas like natural language understanding and robotics. However, it is important to acknowledge that the field of AI has evolved significantly since Dreyfus’s initial critique. While his arguments against the possibility of achieving human-level artificial intelligence through rule-based symbol manipulation remain relevant, advancements in areas like machine learning, deep learning, and neural networks have led to impressive achievements in AI, particularly in pattern recognition, data analysis, and problem-solving in specific domains. Despite these advancements, Dreyfus’s central argument—that human intelligence is fundamentally embodied and situated, relying on tacit knowledge and an understanding of the world that goes beyond explicit rules—continues to resonate. His work serves as a reminder to approach AI development with humility, recognizing the complexities of human intelligence and the potential pitfalls of a purely computational approach.

Flashcards

What is the biological assumption in AI?

The belief that the human brain processes information like a digital computer, using discrete operations and a biological equivalent of on/off switches.

What is the psychological assumption in AI?

The belief that the human mind can be understood as a device that operates on bits of information according to formal rules, like a computer program.

What is the epistemological assumption in AI?

The belief that all knowledge can be formalized, meaning it can be expressed in terms of logical relations and Boolean functions, much like a computer program.

What is the ontological assumption in AI?

The belief that all relevant information about the world can be analyzed as a set of independent, context-free, and determinate elements, like data points for a computer.

What is ‘zeroing in’ in the context of human problem-solving?

The ability to focus attention on relevant areas of a problem or situation without explicitly considering all possibilities. This often involves an intuitive grasp of the overall structure or pattern.

What is Cognitive Simulation (CS)?

A type of programming that attempts to simulate human cognitive processes by incorporating rules of thumb, or heuristics, that people use when solving problems.

What is Artificial Intelligence (AI), in the narrower sense Dreyfus uses it?

The attempt to simulate human intelligent behavior using programming techniques that do not necessarily resemble human mental processes.

What is ambiguity tolerance?

The ability to understand language without having to resolve all ambiguities by relying on context, background knowledge, and a sense of the situation.

What is perspicuous grouping?

A kind of holistic understanding that arises from our embodied experience and allows us to perceive objects and situations as meaningful wholes, rather than collections of isolated features.